
    An Improved Distributed Algorithm for Maximal Independent Set

    The Maximal Independent Set (MIS) problem is one of the basics in the study of locality in distributed graph algorithms. This paper presents an extremely simple randomized algorithm providing a near-optimal local complexity for this problem, which, incidentally, when combined with some recent techniques, also leads to a near-optimal global complexity. Classical algorithms of Luby [STOC'85] and Alon, Babai and Itai [JALG'86] provide the global complexity guarantee that, with high probability, all nodes terminate after $O(\log n)$ rounds. In contrast, our initial focus is on the local complexity, and our main contribution is a very simple algorithm guaranteeing that each particular node $v$ terminates after $O(\log \mathsf{deg}(v) + \log 1/\epsilon)$ rounds, with probability at least $1-\epsilon$. The guarantee holds even if the randomness outside the 2-hop neighborhood of $v$ is determined adversarially. This degree-dependency is optimal, due to a lower bound of Kuhn, Moscibroda, and Wattenhofer [PODC'04]. Interestingly, this local complexity smoothly transitions to a global complexity: by adding techniques of Barenboim, Elkin, Pettie, and Schneider [FOCS'12, arXiv:1202.1983v3], we get a randomized MIS algorithm with a high-probability global complexity of $O(\log \Delta) + 2^{O(\sqrt{\log \log n})}$, where $\Delta$ denotes the maximum degree. This improves over the $O(\log^2 \Delta) + 2^{O(\sqrt{\log \log n})}$ result of Barenboim et al., and gets close to the $\Omega(\min\{\log \Delta, \sqrt{\log n}\})$ lower bound of Kuhn et al. Corollaries include improved algorithms for MIS in graphs of upper-bounded arboricity or lower-bounded girth, for Ruling Sets, for MIS in the Local Computation Algorithms (LCA) model, and a faster distributed algorithm for the Lovász Local Lemma.
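
    For intuition, here is a small Python simulation of the round structure this abstract describes, as we read the paper: each node keeps a desire-level $p_t(v)$ starting at 1/2, marks itself with probability $p_t(v)$, joins the MIS when it is the only marked node in its neighborhood, and then halves or doubles its desire-level depending on whether its effective degree $\sum_{u \in N(v)} p_t(u)$ is at least 2. This is a hedged sketch; the constants, termination handling, and graph representation are our own illustrative choices, not code from the paper.

        import random

        def simulate_mis(adj, max_rounds=1000):
            """Centralized simulation of the desire-level MIS dynamics.
            adj: dict mapping each node to the set of its neighbors."""
            p = {v: 0.5 for v in adj}  # desire-levels, initially 1/2
            in_mis = set()
            alive = set(adj)
            for _ in range(max_rounds):
                if not alive:
                    break
                # each alive node marks itself with probability p[v]
                marked = {v for v in alive if random.random() < p[v]}
                # a marked node with no marked neighbor joins the MIS
                for v in marked:
                    if not any(u in marked for u in adj[v]):
                        in_mis.add(v)
                # MIS nodes and their neighbors terminate
                done = {v for v in alive
                        if v in in_mis or any(u in in_mis for u in adj[v])}
                alive -= done
                # effective degree: sum of alive neighbors' desire-levels
                d = {v: sum(p[u] for u in adj[v] if u in alive) for v in alive}
                for v in alive:
                    p[v] = p[v] / 2 if d[v] >= 2 else min(2 * p[v], 0.5)
            return in_mis

    Once `alive` is empty, the returned set is independent by construction and maximal, since every terminated node is either in the MIS or adjacent to it.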

    Optimal Error Rates for Interactive Coding II: Efficiency and List Decoding

    We study coding schemes for error correction in interactive communications. Such interactive coding schemes simulate any $n$-round interactive protocol using $N$ rounds over an adversarial channel that corrupts up to $\rho N$ transmissions. Important performance measures for a coding scheme are its maximum tolerable error rate $\rho$, its communication complexity $N$, and its computational complexity. We give the first coding scheme for the standard setting which performs optimally in all three measures: our randomized non-adaptive coding scheme has a near-linear computational complexity and tolerates any error rate $\rho < 1/4$ with a linear $N = \Theta(n)$ communication complexity. This improves over prior results, each of which performed well in only two of these measures. We also give results for other settings of interest, namely, the first computationally and communication-efficient schemes that tolerate $\rho < \frac{2}{7}$ adaptively, $\rho < \frac{1}{3}$ if only one party is required to decode, and $\rho < \frac{1}{2}$ if list decoding is allowed. These are the optimal tolerable error rates for the respective settings, and these coding schemes also have near-linear computational and communication complexity. These results are obtained via two techniques: we give a general black-box reduction which reduces unique decoding, in various settings, to list decoding, and we show how to boost the computational and communication efficiency of any list decoder to near-linear.
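
    To make the model concrete, the following toy Python harness (our own illustration, not code from the paper) captures the budget constraint used above: across $N$ transmissions, an adversary may replace at most $\lfloor \rho N \rfloor$ of them, and a coding scheme is judged by whether the simulated protocol still recovers the original $n$-round protocol's outcome.

        import math

        class AdversarialChannel:
            """Channel from the abstract's model: N transmissions, of which
            the adversary may corrupt at most floor(rho * N)."""

            def __init__(self, N, rho):
                self.budget = math.floor(rho * N)

            def transmit(self, symbol, tampered=None):
                """Deliver symbol, unless the adversary supplies a
                replacement and still has corruption budget left."""
                if tampered is not None and self.budget > 0:
                    self.budget -= 1
                    return tampered
                return symbol

        # Example: with rho = 1/4 and N = 100, at most 25 symbols can change.
        ch = AdversarialChannel(N=100, rho=0.25)
        assert ch.transmit(0, tampered=1) == 1  # corrupted, budget 25 -> 24
        assert ch.transmit(0) == 0              # delivered intact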

    Optimal Error Rates for Interactive Coding I: Adaptivity and Other Settings

    We consider the task of interactive communication in the presence of adversarial errors and present tight bounds on the tolerable error rates in a number of different settings. Most significantly, we explore adaptive interactive communication, where the communicating parties decide who should speak next based on the history of the interaction. Braverman and Rao [STOC'11] show that non-adaptively one can code for any constant error rate below 1/4, but no more. They asked whether this bound could be improved using adaptivity. We answer this open question in the affirmative (with a slightly different collection of resources): our adaptive coding scheme tolerates any error rate below 2/7, and we show that tolerating a higher error rate is impossible. We also show that in the setting of Franklin et al. [CRYPTO'13], where parties share randomness not known to the adversary, adaptivity increases the tolerable error rate from 1/2 to 2/3. For list-decodable interactive communications, where each party outputs a constant-size list of possible outcomes, the tight tolerable error rate is 1/2. Our negative results hold even if the communication and computation are unbounded, whereas for our positive results communication and computation are polynomially bounded. Most prior work considered coding schemes with a linear amount of communication while allowing unbounded computation. We argue that studying tolerable error rates in this relaxed context helps to identify a setting's intrinsic optimal error rate. We put forward a strong working hypothesis which stipulates that for any setting the maximum tolerable error rate is independent of many computational and communication complexity measures. We believe this hypothesis to be a powerful guideline for the design of simple, natural, and efficient coding schemes and for understanding the (im)possibilities of coding for interactive communications.
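
    For a back-of-the-envelope view of the 1/4 barrier cited above, here is a simplified version of the standard argument, under the assumption (ours, for concreteness) that each party sends exactly $N/2$ symbols. In a non-adaptive scheme the speaking order is fixed in advance, so the adversary can spend its entire budget on one party, say Alice:
    $$\rho N = \frac{N}{4} = \frac{1}{2} \cdot \frac{N}{2},$$
    i.e., at $\rho = 1/4$ the adversary can corrupt half of Alice's transmissions. Corrupting half of them toward what Alice would send on input $x$ while the rest remain consistent with input $x'$ makes the two inputs yield the same view for Bob, so unique decoding must fail. Adaptivity changes this budget arithmetic, which is how error rates up to 2/7 become attainable.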

    Deterministic Distributed Edge-Coloring via Hypergraph Maximal Matching

    We present a deterministic distributed algorithm that computes a $(2\Delta-1)$-edge-coloring, or even list-edge-coloring, in any $n$-node graph with maximum degree $\Delta$, in $O(\log^7 \Delta \log n)$ rounds. This answers one of the long-standing open questions of distributed graph algorithms from the late 1980s, which asked for a polylogarithmic-time algorithm; see, e.g., Open Problem 4 in the Distributed Graph Coloring book of Barenboim and Elkin. The previous best round complexities were $2^{O(\sqrt{\log n})}$ by Panconesi and Srinivasan [STOC'92] and $\tilde{O}(\sqrt{\Delta}) + O(\log^* n)$ by Fraigniaud, Heinrich, and Kosowski [FOCS'16]. A corollary of our deterministic list-edge-coloring also improves the randomized complexity of $(2\Delta-1)$-edge-coloring to $\mathrm{poly}(\log\log n)$ rounds. The key technical ingredient is a deterministic distributed algorithm for hypergraph maximal matching, which we believe will be of interest beyond this result. In any hypergraph of rank $r$ (where each hyperedge has at most $r$ vertices) with $n$ nodes and maximum degree $\Delta$, this algorithm computes a maximal matching in $O(r^5 \log^{6+\log r} \Delta \log n)$ rounds. This hypergraph matching algorithm and its extensions lead to a number of other results: a polylogarithmic-time deterministic distributed maximal independent set algorithm for graphs with bounded neighborhood independence, answering Open Problem 5 of Barenboim and Elkin's book; a $(\log \Delta/\varepsilon)^{O(\log (1/\varepsilon))}$-round deterministic algorithm for $(1+\varepsilon)$-approximation of maximum matching; and a quasi-polylogarithmic-time deterministic distributed algorithm for orienting $\lambda$-arboricity graphs with out-degree at most $(1+\varepsilon)\lambda$, for any constant $\varepsilon>0$, partially answering Open Problem 10 of Barenboim and Elkin's book.
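
    The palette size $2\Delta-1$ is exactly what simple counting demands: an edge shares an endpoint with at most $2(\Delta-1) = 2\Delta-2$ other edges, so some color in a palette of size $2\Delta-1$ is always free. The Python sketch below (our own sequential illustration; the paper's contribution is achieving this deterministically in the distributed setting) makes that argument executable, and it works unchanged with arbitrary per-edge color lists of size $2\Delta-1$.

        from collections import defaultdict

        def greedy_edge_coloring(edges):
            """Sequentially color edges with palette {0, ..., 2*Delta - 2}.
            edges: list of (u, v) pairs with u != v."""
            deg = defaultdict(int)
            for u, v in edges:
                deg[u] += 1
                deg[v] += 1
            Delta = max(deg.values())
            palette = range(2 * Delta - 1)
            used_at = defaultdict(set)  # vertex -> colors used at that vertex
            color = {}
            for u, v in edges:
                taken = used_at[u] | used_at[v]  # at most 2*Delta - 2 colors
                c = next(c for c in palette if c not in taken)  # always exists
                color[(u, v)] = c
                used_at[u].add(c)
                used_at[v].add(c)
            return color

        # Example: a triangle has Delta = 2, so 3 colors suffice.
        print(greedy_edge_coloring([(0, 1), (1, 2), (2, 0)]))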

    Bounds on contention management in radio networks

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2013. Cataloged from PDF version of thesis. Includes bibliographical references (p. 79-82). By Mohsen Ghaffari.

    In this thesis, we study the local broadcast problem in two well-studied wireless network models. The local broadcast problem is a theoretical approach to capturing the contention-management issue in wireless networks; it assumes that processes are provided messages, one by one, that must be delivered to their neighbors. We study this problem in two theoretical models of wireless networks: the classical radio network model and its more recent generalization, the dual graph model, which includes the possibility of unreliable, time-changing links. Both models are synchronous; the execution proceeds in lock-step rounds, and in each round, each node either transmits a message or listens. In each round of the dual graph model, each unreliable link might be active or inactive, whereas in the classical model, all links are always active. In each round, each node receives a message if and only if it is listening and exactly one of its neighbors, with respect to the active links of that round, transmits. The time complexity of local broadcast algorithms is measured by two bounds, the acknowledgment bound and the progress bound. Roughly speaking, the former bounds the time it takes each broadcasting node to deliver its message to all its neighbors, and the latter bounds the time it takes a node to receive at least one message, assuming it has a broadcasting neighbor. Typically these bounds depend on the maximum contention and the network size. The standard local broadcast strategy is the Decay protocol introduced by Bar-Yehuda et al. [19] in 1987. During the 25-year period in which this strategy has been used, it has remained an open question whether it is optimal. In this thesis, we resolve this long-standing question. We present lower bounds on the progress and acknowledgment bounds in both the classical and the dual graph model, and we show that, with a slight optimization, the Decay protocol matches these lower bounds in both models. However, the tight progress bound of the dual graph model is exponentially larger than the progress bound in the classical model, in its dependence on the maximum contention. This establishes a separation between the two models, proving that progress in the dual graph model is strictly and exponentially harder than in its classical predecessor. Combined, our results provide an essentially complete characterization of the local broadcast problem in these two important models.
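
    For intuition about the Decay strategy discussed above, here is a small Python simulation of its core idea as we recall it from Bar-Yehuda et al. (the thesis analyzes an optimized variant; the structure below is our own simplification): all contenders start transmitting, each independently keeps transmitting in the next step with probability 1/2, and a listening neighbor receives as soon as some step has exactly one transmitter.

        import random

        def decay_phase(num_contenders, steps):
            """Run one Decay phase; return the step at which a message is
            received (exactly one transmitter), or None if the phase fails."""
            active = num_contenders
            for step in range(1, steps + 1):
                if active == 1:
                    return step  # unique transmitter: delivery succeeds
                if active == 0:
                    return None  # everyone fell silent before delivery
                # each active node keeps transmitting with probability 1/2
                active = sum(random.random() < 0.5 for _ in range(active))
            return None

        # With contention k, roughly log2(k) + O(1) steps give a constant
        # per-phase success probability; repeating phases boosts it further.
        wins = sum(decay_phase(64, steps=10) is not None for _ in range(1000))
        print(wins / 1000)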